The pytest framework is one of the most advanced and flexible testing frameworks available in Python. It can be used for everything from your smallest to your largest testing needs. It also supports third-party plugins, of which there are literally hundreds; a consolidated list of plugins can be found at https://pytest.readthedocs.io/en/2.7.3/plugins_index/index.html.
An example of a simple test:
from app import increment


def test_will_fail():
    """
    "increment(3)" should return 4, but since we are
    evaluating it against 5, the test should fail.
    """
    assert increment(3) == 5


def test_will_pass():
    """."""
    assert increment(4) == 5
pytest is not part of the default Python installation and needs to be installed manually, which can be done by executing the following command:
pip install -U pytest --user
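Once installed, you can verify the installation with:
pytest --version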
Let's look at the above example as a complete pytest-based test file.
# -*- coding: utf-8 -*-
"""
Created on Fri May 12 04:16:14 2017.

@author: johri_m.
"""


def increment(x):
    """."""
    return x + 1


def test_will_fail():
    """."""
    assert increment(3) == 5


def test_will_pass():
    """."""
    assert increment(4) == 5
In the above code, increment is the function under test, and test_will_fail and test_will_pass are the test functions.
Note
Both test function names start with test. By pytest's default discovery rules, all test function names should start with test.
pytest, when executed without options, will search for all the tests in the current folder and all subfolders and execute them.
$ pytest
============================= test session starts ==============================
platform linux -- Python 3.6.1, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/mayank/code/mj/ebooks/python/lep/Section 2 - Advance Python/Chapter S2.08 - Automated Testing/code/pytest/basics, inifile:
plugins: bdd-2.20.0, allure-pytest-2.3.2b1
collected 14 items
python_test.py . [ 7%]
test_basic_1.py F. [ 21%]
test_basics_1_1.py .F... [ 57%]
test_class_1.py .F. [ 78%]
test_class_2.py .. [ 92%]
test_fixture_1.py . [100%]
=================================== FAILURES ===================================
________________________________ test_will_fail ________________________________
def test_will_fail():
"""."""
> assert increment(3) == 5
E assert 4 == 5
E + where 4 = increment(3)
test_basic_1.py:16: AssertionError
______________________ TestClass.test_validate_attr_wrong ______________________
self = <test_basics_1_1.TestClass object at 0x7f9c0a58db00>
def test_validate_attr_wrong(self):
x = User
> assert hasattr(x, 'fullname')
E AssertionError: assert False
E + where False = hasattr(<class 'test_basics_1_1.User'>, 'fullname')
test_basics_1_1.py:26: AssertionError
______________________ TestClass.test_validate_attr_wrong ______________________
self = <test_class_1.TestClass object at 0x7f9c0a30b358>
def test_validate_attr_wrong(self):
x = User
> assert hasattr(x, 'fullname')
E AssertionError: assert False
E + where False = hasattr(<class 'test_class_1.User'>, 'fullname')
test_class_1.py:26: AssertionError
$ pytest basic_1.py
============================= test session starts ==============================
platform linux -- Python 3.6.1, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/mayank/code/mj/ebooks/python/lep/Section 2 - Advance Python/Chapter S2.08 - Automated Testing/code/pytest/basics, inifile:
plugins: bdd-2.20.0
collected 2 items
basic_1.py F. [100%]
=================================== FAILURES ===================================
________________________________ test_will_fail ________________________________
def test_will_fail():
"""."""
> assert increment(3) == 5
E assert 4 == 5
E + where 4 = increment(3)
basic_1.py:16: AssertionError
====================== 1 failed, 1 passed in 0.03 seconds ======================
As you can see, pytest scanned the folder for all files whose names contain the text test, and then searched those files for all functions with test in their names.
After finding all the tests to execute, it runs them and reports the progress of execution. Once all the tests have been executed, it provides a summary of all the test cases that failed, along with their assertion details.
test_basic_1.py F. [ 21%]
test_basics_1_1.py .F... [ 57%]
test_class_1.py .F. [ 78%]
If you look at the summary, you will find that pytest has marked with an "F" every failed test in each file.
pytest provides lots of command-line options; we are going to cover a few of the most common ones later in the chapter.
pytest -h provides a summary of all the command-line options available.
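pytest can also be invoked through the Python interpreter with python -m pytest; this behaves like the plain pytest command, except that it additionally adds the current directory to sys.path: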
$ python -m pytest basic_1.py
============================= test session starts ==============================
platform linux -- Python 3.6.1, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/mayank/code/mj/ebooks/python/lep/Section 2 - Advance Python/Chapter S2.08 - Automated Testing/code/pytest/basics, inifile:
plugins: bdd-2.20.0
collected 2 items
basic_1.py F. [100%]
=================================== FAILURES ===================================
________________________________ test_will_fail ________________________________
def test_will_fail():
"""."""
> assert increment(3) == 5
E assert 4 == 5
E + where 4 = increment(3)
basic_1.py:16: AssertionError
====================== 1 failed, 1 passed in 0.02 seconds ======================
Allows for compact test suites
The idioms that pytest first introduced brought a change to the Python community, because they made it possible for test suites to be written in a far more compact style than was possible before. pytest introduced the idea that Python tests should be plain Python functions instead of forcing developers to put their tests inside large test classes.
Minimal boilerplate
Tests written with pytest need very little boilerplate code, which makes them easy to write and understand.
Test parametrization
You can parametrize any test and cover all uses of a unit without code duplication, as the sketch below shows.
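A minimal sketch of a parametrized test (the function and values here are illustrative, not from the example files above):

import pytest


@pytest.mark.parametrize("value, expected", [(3, 4), (4, 5), (-1, 0)])
def test_increment(value, expected):
    # each (value, expected) tuple runs as a separate test case
    assert value + 1 == expected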
Very pretty and useful failure information
pytest rewrites your tests so that it can store the intermediate values that lead to a failing assert, and it gives you a very readable explanation of what was asserted and what failed.
Fixtures are simple and easy to use
A fixture is just a function that returns a value, and to use a fixture you just have to add an argument with its name to your test function. You can also use a fixture from another fixture in the same manner, so it is easy to make them modular (see the sketch below). You can also parametrize a fixture, and every test that uses it will run with all values of the parameters, no test rewrite needed. If your test uses several parametrized fixtures, all combinations of their parameters will be covered.
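A minimal sketch of one fixture building on another (the fixture names and data are illustrative):

import pytest


@pytest.fixture
def db_connection():
    # stand-in for an expensive resource such as a database handle
    return {"connected": True}


@pytest.fixture
def user_record(db_connection):
    # this fixture uses another fixture, keeping the setup modular
    return {"db": db_connection, "name": "alice"}


def test_user_has_db(user_record):
    assert user_record["db"]["connected"]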
Pdb just works
pytest automagically (and safely) disables output capturing when you enter pdb, so you don't have to redirect the debugger to another console or wade through a huge amount of unneeded output from other tests.
Test discovery by file-path
Over 150 plugins
There are more than 150 plugins to customize pytest, such as pytest-bdd and pytest-konira for writing Behaviour-Driven Development style tests.
The fact that pytest uses its own special routines to write tests means that you are trading compatibility for convenience. In other words, writing tests for pytest ties you to pytest alone, and the only way to move to another testing framework is to rewrite most of the code.
pytest -h
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
positional arguments:
file_or_dir
general:
-k EXPRESSION only run tests which match the given substring
expression. An expression is a python evaluatable
expression where all names are substring-matched
against test names and their parent classes. Example:
-k 'test_method or test_other' matches all test
functions and classes whose name contains
'test_method' or 'test_other', while -k 'not
test_method' matches those that don't contain
'test_method' in their names. Additionally keywords
are matched to classes and functions containing extra
names in their 'extra_keyword_matches' set, as well as
functions which have names assigned directly to them.
-m MARKEXPR only run tests matching given mark expression.
example: -m 'mark1 and not mark2'.
--markers show markers (builtin, plugin and per-project ones).
-x, --exitfirst exit instantly on first error or failed test.
--maxfail=num exit after first num failures or errors.
--strict marks not registered in configuration file raise
errors.
-c file load configuration from `file` instead of trying to
locate one of the implicit configuration files.
--continue-on-collection-errors
Force test execution even if collection errors occur.
--rootdir=ROOTDIR Define root directory for tests. Can be relative path:
'root_dir', './root_dir', 'root_dir/another_dir/';
absolute path: '/home/user/root_dir'; path with
variables: '$HOME/root_dir'.
--fixtures, --funcargs
show available fixtures, sorted by plugin appearance
--fixtures-per-test show fixtures per test
--import-mode={prepend,append}
prepend/append to sys.path when importing test
modules, default is to prepend.
--pdb start the interactive Python debugger on errors.
--pdbcls=modulename:classname
start a custom interactive Python debugger on errors.
For example:
--pdbcls=IPython.terminal.debugger:TerminalPdb
--capture=method per-test capturing method: one of fd|sys|no.
-s shortcut for --capture=no.
--runxfail run tests even if they are marked xfail
--lf, --last-failed rerun only the tests that failed at the last run (or
all if none failed)
--ff, --failed-first run all tests but run the last failures first. This
may re-order tests and thus lead to repeated fixture
setup/teardown
--nf, --new-first run tests from new files first, then the rest of the
tests sorted by file mtime
--cache-show show cache contents, don't perform collection or tests
--cache-clear remove all cache contents at start of test run.
--lfnf={all,none}, --last-failed-no-failures={all,none}
change the behavior when no test failed in the last
run or no information about the last failures was
found in the cache
reporting:
-v, --verbose increase verbosity.
-q, --quiet decrease verbosity.
--verbosity=VERBOSE set verbosity
-r chars show extra test summary info as specified by chars
(f)ailed, (E)error, (s)skipped, (x)failed, (X)passed,
(p)passed, (P)passed with output, (a)all except pP.
Warnings are displayed at all times except when
--disable-warnings is set
--disable-warnings, --disable-pytest-warnings
disable warnings summary
-l, --showlocals show locals in tracebacks (disabled by default).
--tb=style traceback print mode (auto/long/short/line/native/no).
--show-capture={no,stdout,stderr,log,all}
Controls how captured stdout/stderr/log is shown on
failed tests. Default is 'all'.
--full-trace don't cut any tracebacks (default is to cut).
--color=color color terminal output (yes/no/auto).
--durations=N show N slowest setup/test durations (N=0 for all).
--pastebin=mode send failed|all info to bpaste.net pastebin service.
--junit-xml=path create junit-xml style report file at given path.
--junit-prefix=str prepend prefix to classnames in junit-xml output
--result-log=path DEPRECATED path for machine-readable result log.
--gherkin-terminal-reporter
enable gherkin output
collection:
--collect-only only collect tests, don't execute them.
--pyargs try to interpret all arguments as python packages.
--ignore=path ignore path during collection (multi-allowed).
--deselect=nodeid_prefix
deselect item during collection (multi-allowed).
--confcutdir=dir only load conftest.py's relative to specified dir.
--noconftest Don't load any conftest.py files.
--keep-duplicates Keep duplicate tests.
--collect-in-virtualenv
Don't ignore tests in a local virtualenv directory
--doctest-modules run doctests in all .py modules
--doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
choose another output format for diffs on doctest
failure
--doctest-glob=pat doctests file matching pattern, default: test*.txt
--doctest-ignore-import-errors
ignore doctest ImportErrors
--doctest-continue-on-failure
for a given doctest, continue to run after the first
failure
test session debugging and configuration:
--basetemp=dir base temporary directory for this test run.
--version display pytest lib version and import information.
-h, --help show help message and configuration info
-p name early-load given plugin (multi-allowed). To avoid
loading of plugins, use the `no:` prefix, e.g.
`no:doctest`.
--trace-config trace considerations of conftest.py files.
--debug store internal tracing debug information in
'pytestdebug.log'.
-o OVERRIDE_INI, --override-ini=OVERRIDE_INI
override ini option with "option=value" style, e.g.
`-o xfail_strict=True -o cache_dir=cache`.
--assert=MODE Control assertion debugging tools. 'plain' performs no
assertion debugging. 'rewrite' (the default) rewrites
assert statements in test modules on import to provide
assert expression information.
--setup-only only setup fixtures, do not execute tests.
--setup-show show setup of fixtures while executing tests.
--setup-plan show what fixtures and tests would be executed but
don't execute anything.
pytest-warnings:
-W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
set which warnings to report, see -W option of python
itself.
logging:
--no-print-logs disable printing caught logs on failed tests.
--log-level=LOG_LEVEL
logging level used by the logging module
--log-format=LOG_FORMAT
log format as used by the logging module.
--log-date-format=LOG_DATE_FORMAT
log date format as used by the logging module.
--log-cli-level=LOG_CLI_LEVEL
cli logging level.
--log-cli-format=LOG_CLI_FORMAT
log format as used by the logging module.
--log-cli-date-format=LOG_CLI_DATE_FORMAT
log date format as used by the logging module.
--log-file=LOG_FILE path to a file when logging will be written to.
--log-file-level=LOG_FILE_LEVEL
log file logging level.
--log-file-format=LOG_FILE_FORMAT
log format as used by the logging module.
--log-file-date-format=LOG_FILE_DATE_FORMAT
log date format as used by the logging module.
Cucumber JSON:
--cucumber-json=path create cucumber json style report file at given path.
--cucumber-json-expanded
expand scenario outlines into scenarios and fill in
the step names
--generate-missing Generate missing bdd test code for given feature files
and exit.
--feature=FILE_OR_DIR
Feature file or directory to generate missing code
for. Multiple allowed.
[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:
markers (linelist) markers for test functions
empty_parameter_set_mark (string) default marker for empty parametersets
norecursedirs (args) directory patterns to avoid for recursion
testpaths (args) directories to search for tests when no files or dire
console_output_style (string) console output: classic or with additional progr
usefixtures (args) list of default fixtures to be used with this project
python_files (args) glob-style file patterns for Python test module disco
python_classes (args) prefixes or glob names for Python test class discover
python_functions (args) prefixes or glob names for Python test function and m
xfail_strict (bool) default for the strict parameter of xfail markers whe
junit_suite_name (string) Test suite name for JUnit report
junit_logging (string) Write captured log messages to JUnit report: one of n
doctest_optionflags (args) option flags for doctests
doctest_encoding (string) encoding used for doctest files
cache_dir (string) cache directory path.
filterwarnings (linelist) Each line specifies a pattern for warnings.filterwar
log_print (bool) default value for --no-print-logs
log_level (string) default value for --log-level
log_format (string) default value for --log-format
log_date_format (string) default value for --log-date-format
log_cli (bool) enable log display during test run (also known as "li
log_cli_level (string) default value for --log-cli-level
log_cli_format (string) default value for --log-cli-format
log_cli_date_format (string) default value for --log-cli-date-format
log_file (string) default value for --log-file
log_file_level (string) default value for --log-file-level
log_file_format (string) default value for --log-file-format
log_file_date_format (string) default value for --log-file-date-format
addopts (args) extra command line options
minversion (string) minimally required pytest version
environment variables:
PYTEST_ADDOPTS extra command line options
PYTEST_PLUGINS comma-separated plugins to load during startup
PYTEST_DEBUG set to enable debug tracing of pytest's internals
to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
$ pytest -v basic_1.py
========================================================================= test session starts ==========================================================================
platform linux -- Python 3.6.1, pytest-3.5.0, py-1.5.3, pluggy-0.6.0 -- /usr/bin/python3.6
cachedir: .pytest_cache
rootdir: /home/mayank/code/mj/ebooks/python/lep/Section 2 - Advance Python/Chapter S2.08 - Automated Testing/code/pytest/basics, inifile:
plugins: bdd-2.20.0
collected 2 items
basic_1.py::test_will_fail FAILED [ 50%]
basic_1.py::test_will_pass PASSED [100%]
=============================================================================== FAILURES ===============================================================================
____________________________________________________________________________ test_will_fail ____________________________________________________________________________
def test_will_fail():
"""."""
> assert increment(3) == 5
E assert 4 == 5
E + where 4 = increment(3)
basic_1.py:16: AssertionError
================================================================== 1 failed, 1 passed in 0.02 seconds ==================================================================
$ pytest -q basic_1.py
F. [100%]
=============================================================================== FAILURES ===============================================================================
____________________________________________________________________________ test_will_fail ____________________________________________________________________________
def test_will_fail():
"""."""
> assert increment(3) == 5
E assert 4 == 5
E + where 4 = increment(3)
basic_1.py:16: AssertionError
1 failed, 1 passed in 0.02 seconds
The -s option is a shortcut for --capture=no. What that means is that all print output will be displayed on the stdout console.
pytest -s test_basic_1.py
============================= test session starts ==============================
platform linux -- Python 3.6.1, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/mayank/code/mj/ebooks/python/lep/Section 2 - Advance Python/Chapter S2.08 - Automated Testing/code/pytest/basics, inifile:
plugins: bdd-2.20.0, allure-pytest-2.3.2b1
collected 2 items
test_basic_1.py testing 3+1
F.
=================================== FAILURES ===================================
________________________________ test_will_fail ________________________________
def test_will_fail():
"""."""
print("testing 3+1")
> assert increment(3) == 5
E assert 4 == 5
E + where 4 = increment(3)
test_basic_1.py:17: AssertionError
====================== 1 failed, 1 passed in 0.04 seconds ======================
In the above example, testing 3+1 is printed on the console; it is the output of the statement print("testing 3+1").
The --collect-only option collects all the test cases that would be executed and lists them without running them. This is a good way to find out how many test cases fulfill a certain condition.
pytest --collect-only
========================================================================= test session starts ==========================================================================
platform linux -- Python 3.6.1, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/mayank/code/mj/ebooks/python/lep/Section 2 - Advance Python/Chapter S2.08 - Automated Testing/code/pytest, inifile:
plugins: bdd-2.20.0
collected 88 items
<Module 'basics/python_test.py'>
<Class 'Test_PythonHomePage'>
<Instance '()'>
<Function 'test_search'>
<Module 'fixtures/test_1.py'>
<Function 'test_fixture_contents'>
<Function 'test_try_to_break_the_fixture_1'>
<Function 'test_try_to_break_the_fixture_2'>
<Function 'test_try_to_break_the_module_fixture_1'>
<Function 'test_try_to_break_the_module_fixture_2'>
- - - - -
<Module 'fixtures/test_scope_1.py'>
<UnitTestCase 'MyTest'>
<TestCaseFunction 'test_method1'>
<TestCaseFunction 'test_method2'>
<Module 'fixtures/test_scope_function.py'>
<UnitTestCase 'MyTest'>
<TestCaseFunction 'test_method1'>
<TestCaseFunction 'test_method2'>
<Module 'my magic/test_5.py'>
<Function 'test_foo[FOO-1-1]'>
<Function 'test_foo[FOO-2--3]'>
Running only on a directory
pytest --collect-only my\ magic/
========================================================================= test session starts ==========================================================================
platform linux -- Python 3.6.1, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/mayank/code/mj/ebooks/python/lep/Section 2 - Advance Python/Chapter S2.08 - Automated Testing/code/pytest, inifile:
plugins: bdd-2.20.0
collected 2 items
<Module 'my magic/test_5.py'>
<Function 'test_foo[FOO-1-1]'>
<Function 'test_foo[FOO-2--3]'>
pytest allows you to launch pdb on the first failure, or on a specified number of initial failures, using the following command options:
pytest --pdb # launch pdb on every failure
pytest -x --pdb # launch pdb on first failure, then end test session
pytest --pdb --maxfail=4 # launch pdb for first 4 failures
pytest --durations=5
The above command returns the list of the 5 slowest test executions.
pytest allows you to select marked test cases for execution, as shown in the example below:
pytest -m staging
The above command will execute all the test cases marked with the decorator @pytest.mark.staging, such as in the sketch that follows.
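A minimal sketch of a marked test file (the file contents and test names are illustrative):

import pytest


@pytest.mark.staging
def test_staging_config():
    # selected by `pytest -m staging`
    assert True


def test_always_collected():
    # deselected by `pytest -m staging`, run by a plain `pytest`
    assert True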
$ pytest --junitxml="report.xml"
===================== test session starts ========================
platform linux -- Python 3.6.1, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/mayank/code/mj/ebooks/python/lep/Section 2 - Advance Python/Chapter S2.08 - Automated Testing/code/pytest, inifile:
plugins: bdd-2.20.0, allure-pytest-2.3.2b1
collected 88 items
basics/python_test.py E [ 1%]
fixtures/test_1.py ....F [ 6%]
fixtures/test_2.py . [ 7%]
fixtures/test_basic_1.py . [ 9%]
fixtures/test_basics_0.py . [ 10%]
fixtures/test_basics_1.py .. [ 12%]
fixtures/test_basics_1_1.py .. [ 14%]
fixtures/test_basics_2.py .. [ 17%]
fixtures/test_mark_1.py ... [ 20%]
fixtures/test_mark_2.py ... [ 23%]
fixtures/test_pyfixture_1.py ..F [ 27%]
fixtures/test_pyfixture_2.py .. [ 29%]
fixtures/test_pyfixture_3.py ('args', ['var0', 'var1'], 'x', 1)
.('args', ['var0', 'var1'], 'x', 2)
. [ 31%]
fixtures/test_pyfixture_parameter_1.py .................. [ 52%]
fixtures/test_pyfixture_parameter_1_1.py .................. [ 72%]
fixtures/test_pyfixture_parameter_2.py .................. [ 93%]
fixtures/test_scope_1.py FF [ 95%]
fixtures/test_scope_function.py Fs [ 97%]
my magic/test_5.py FF [100%]
============================ ERRORS ===================
__________ ERROR at setup of Test_PythonHomePage.test_search __________
cls = <class 'python_test.Test_PythonHomePage'>
@classmethod
def setup_class(cls):
""" setup any state specific to the execution of the given class (which
usually contains tests).
"""
> with open("config.yaml", 'r') as stream:
E FileNotFoundError: [Errno 2] No such file or directory: 'config.yaml'
basics/python_test.py:10: FileNotFoundError
============================= FAILURES ===============
__________________________ test_try_to_break_the_module_fixture_2 ________________________
i_also_set_things_up = {'status': 'doing fine'}
def test_try_to_break_the_module_fixture_2(i_also_set_things_up):
> assert i_also_set_things_up['flashing'] == "dicts can't flash!"
E KeyError: 'flashing'
fixtures/test_1.py:28: KeyError
-------------------- Captured stdout setup ----------
Dummy DB Creation
_______________________ MyTest.test_method2 _________
self = <test_scope_1.MyTest testMethod=test_method2>
def test_method2(self):
> assert 0, self.db # fail for demo purposes
E AssertionError: <test_scope_1.db_class.<locals>.DummyDB object at 0x7f05795ab9b0>
E assert 0
fixtures/test_scope_1.py:37: AssertionError
------------ generated xml file: /home/mayank/code/mj/ebooks/python/lep/Section 2 - Advance Python/Chapter S2.08 - Automated Testing/code/pytest/report.xml ------------
================ 7 failed, 79 passed, 1 skipped, 1 error in 0.36 seconds ========
Fixtures are one of the most important concepts in pytest. They take the concept of setup and teardown to the next level by adding many customizations, such as:
- fixtures have explicit names and are activated by declaring them in test functions, modules, classes or whole projects
- fixtures are modular: each fixture name triggers a fixture function, which can itself use other fixtures
- fixtures and tests can be parametrized according to configuration and component options, and fixtures can be re-used across class, module or whole-test-session scopes
Also, they can be used to provide a fixed baseline on which tests can be repeatedly and reliably executed.
Any function can be marked as a fixture by adding the @pytest.fixture decorator to it.
# pyfixture_1.py
import pytest


@pytest.fixture()
def my_fixture():
    print("This is a fixture")


def test_my_fixture(my_fixture):
    print("I'm the test")
import pytest


class MyTester:  # minimal stand-in for the tester object in this example
    def __init__(self, args):
        self.args = args

    def dothis(self):
        print(self.args)


@pytest.fixture
def tester(request):
    """Create tester object"""
    print(request.param)
    return MyTester(request.param)


class TestIt:
    @pytest.mark.parametrize('tester', [['var1', 'var2']], indirect=True)
    def test_tc1(self, tester):
        tester.dothis()
        assert 1
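Here indirect=True tells pytest to pass each parametrized value to the tester fixture via request.param instead of handing it directly to the test function, so the fixture controls how the value is turned into the final object.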
pytest test_mod.py # run tests in module
pytest somepath # run all tests below somepath
pytest -k stringexpr # only run tests with names that match the
# "string expression", e.g. "MyClass and not method"
# will select TestMyClass.test_something
# but not TestMyClass.test_method_simple
pytest test_mod.py::test_func # only run tests that match the "node ID",
# e.g. "test_mod.py::test_func" will select
# only test_func in test_mod.py
pytest test_mod.py::TestClass::test_method # run a single method in
# a single class
pytest will run all files in the current directory and its subdirectories of the form test_*.py or *_test.py. More generally, it follows standard test discovery rules.
# content of test_sysexit.py
import pytest


def f():
    raise SystemExit(1)


def test_mytest():
    with pytest.raises(SystemExit):
        f()
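pytest.raises can also hand you the raised exception for further inspection; a small sketch, assuming the same f as above:

import pytest


def f():
    raise SystemExit(1)


def test_exit_code():
    # excinfo exposes the raised exception instance after the block exits
    with pytest.raises(SystemExit) as excinfo:
        f()
    assert excinfo.value.code == 1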
class TestClass:
    def test_one(self):
        x = "this"
        assert 'h' in x

    def test_two(self):
        x = "hello"
        assert hasattr(x, 'check')
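Note that test_two will fail: Python strings have no check attribute, so hasattr(x, 'check') evaluates to False.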
pytest implements the following standard test discovery:
- If no arguments are specified, collection starts from testpaths (if configured) or the current directory.
- It recurses into directories, unless they match norecursedirs.
- In those directories, it searches for test_*.py or *_test.py files.
- From those files, it collects test-prefixed functions or methods outside of classes, and test-prefixed methods inside Test-prefixed classes (which must not have an __init__ method).
For examples of how to customize your test discovery, see "Changing standard (Python) test discovery" in the pytest documentation.
Within Python modules, pytest also discovers tests using the standard unittest.TestCase subclassing technique, as sketched below.
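A minimal sketch of a unittest-style test that pytest will also collect and run:

import unittest


class TestLegacy(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("abc".upper(), "ABC")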
pytest supports two common test layouts:
Putting tests into an extra directory outside your actual application code might be useful if you have many functional tests or for other reasons want to keep tests separate from actual application code (often a good idea):
setup.py
mypkg/
    __init__.py
    app.py
    view.py
tests/
    test_app.py
    test_view.py
    ...
If you need to have test modules with the same name, you might add __init__.py files to your tests folder and subfolders, changing them to packages:
setup.py
mypkg/
    ...
tests/
    __init__.py
    foo/
        __init__.py
        test_view.py
    bar/
        __init__.py
        test_view.py
In this situation, it is strongly suggested to use a src layout, where the application root package resides in a sub-directory of your root:
setup.py
src/
    mypkg/
        __init__.py
        app.py
        view.py
tests/
    __init__.py
    foo/
        __init__.py
        test_view.py
    bar/
        __init__.py
        test_view.py
Inlining test directories into your application package is useful if you have a direct relation between tests and application modules, and want to distribute the tests along with your application:
setup.py
mypkg/
    __init__.py
    app.py
    view.py
    test/
        __init__.py
        test_app.py
        test_view.py
    ...
In this scheme, it is easy to run your tests using the --pyargs option:
pytest --pyargs mypkg
pytest will discover where mypkg is installed and collect tests from there.
Note that this layout also works in conjunction with the src layout mentioned in the previous section.
You can invoke pytest from Python code directly:
pytest.main()
This acts as if you had invoked "pytest" from the command line. It will not raise SystemExit but returns the exit code instead. You can pass in options and arguments:
pytest.main(['-x', 'mytestdir'])
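A minimal sketch of programmatic invocation, checking the returned exit code ('mytestdir' is just the illustrative path from the line above):

import pytest

exit_code = pytest.main(['-x', 'mytestdir'])
if exit_code == 0:
    print("all tests passed")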